Reviews: Model-Powered Conditional Independence Test

Neural Information Processing Systems

This paper proposes a model-powered approach to conducting conditional independence (CI) tests for i.i.d. data. The basic idea is to use a nearest-neighbor bootstrap to generate samples that follow a distribution close to $f^{CI}$, and then to train and test a classifier to see whether it can distinguish the observed distribution from the nearest-neighbor-bootstrapped distribution. If the classification performance is close to random guessing, one fails to reject the null hypothesis of conditional independence; otherwise one rejects it. In general, the paper addresses an important problem and is presented clearly. The whole method can be decoupled into two major components.
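The first of those components, the nearest-neighbor bootstrap, can be sketched roughly as follows. This is a minimal illustration using scikit-learn, not the authors' implementation; the function name `nn_bootstrap` is hypothetical. The idea is that swapping each sample's $y_i$ for the $y$ of its nearest neighbor in $Z$-space yields samples approximately distributed as $f^{CI}(x,y,z) = f(x|z)\,f(y|z)\,f(z)$.

```python
# Hypothetical sketch of a nearest-neighbor bootstrap: for each sample i,
# replace y_i with the y value of the sample whose z is nearest to z_i.
# X and Z are left untouched, breaking the X-Y dependence given Z.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_bootstrap(X, Y, Z):
    # Find each point's nearest neighbor in Z-space, excluding itself.
    nn = NearestNeighbors(n_neighbors=2).fit(Z)
    _, idx = nn.kneighbors(Z)
    neighbor = idx[:, 1]  # column 0 is the point itself (distance 0)
    # Pair each (x_i, z_i) with the neighbor's y value.
    return X, Y[neighbor], Z
```

On data where $X$ and $Y$ are conditionally independent given $Z$, the bootstrapped triples should be nearly indistinguishable from the originals.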


Model-Powered Conditional Independence Test

Sen, Rajat, Suresh, Ananda Theertha, Shanmugam, Karthikeyan, Dimakis, Alexandros G., Shakkottai, Sanjay

Neural Information Processing Systems

We consider the problem of non-parametric Conditional Independence testing (CI testing) for continuous random variables. Given i.i.d. samples from the joint distribution $f(x,y,z)$ of continuous random vectors $X, Y$ and $Z$, we determine whether $X \perp\!\!\!\perp Y \mid Z$. We approach this by converting the conditional independence test into a classification problem. This allows us to harness very powerful classifiers like gradient-boosted trees and deep neural networks. These models can handle complex probability distributions and allow us to perform significantly better than the prior state of the art for high-dimensional CI testing.
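The classification step described in the abstract can be sketched as follows. This is a minimal sketch, not the paper's exact procedure: the function name `classifier_ci_test`, the threshold `tau`, and the choice of scikit-learn's gradient-boosted trees are all illustrative assumptions. It takes original triples and nearest-neighbor-bootstrapped triples (produced separately), labels them 1 and 0, and checks whether a classifier beats chance on held-out data.

```python
# Hypothetical sketch of the classification-based CI test: if a classifier
# cannot distinguish original samples from bootstrapped ones (held-out
# accuracy near 0.5), we fail to reject conditional independence.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def classifier_ci_test(X, Y, Z, Xb, Yb, Zb, tau=0.05, seed=0):
    real = np.hstack([X, Y, Z])   # label 1: observed distribution
    fake = np.hstack([Xb, Yb, Zb])  # label 0: bootstrapped distribution
    data = np.vstack([real, fake])
    labels = np.r_[np.ones(len(real)), np.zeros(len(fake))]
    Xtr, Xte, ytr, yte = train_test_split(
        data, labels, test_size=0.3, random_state=seed, stratify=labels)
    clf = GradientBoostingClassifier(random_state=seed).fit(Xtr, ytr)
    acc = clf.score(Xte, yte)
    # Reject CI only if accuracy exceeds chance by the margin tau.
    decision = "reject CI" if acc > 0.5 + tau else "fail to reject CI"
    return decision, acc
```

The margin `tau` plays the role of a significance threshold; the paper's actual test statistic and threshold derivation are more careful than this sketch.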